Using KL Divergence for Credibility Assessment

Authors

  • Thibaut Vallée
  • Grégory Bonnet
Abstract

In reputation systems, agents collectively estimate other agents' behaviour from feedback in order to decide with whom to interact. To resist manipulation, most reputation systems weight feedback according to the reputation of the agent that provides it. However, these systems remain sensitive to strategic manipulations such as oscillating attacks or whitewashing. In this paper, we propose (1) a credibility measure for feedback, based on the Kullback-Leibler divergence, to detect malicious behaviour, and (2) filtering functions that enhance already known reputation functions.
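As a rough illustration of the idea (a minimal sketch, not the authors' actual measure, whose formal definition is in the full paper), the snippet below weights a rater by the KL divergence between the empirical distribution of the ratings it issues and the consensus distribution aggregated over all raters. The function names, the histogram encoding, and the exponential mapping to a weight are all illustrative assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions given as count vectors."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def credibility(rater_counts, consensus_counts, scale=1.0):
    """Map a rater's divergence from the consensus rating histogram to a
    weight in (0, 1]: raters matching the consensus keep weight near 1,
    while divergent (possibly malicious) raters are down-weighted."""
    return float(np.exp(-scale * kl_divergence(rater_counts, consensus_counts)))

# Example over three rating levels {low, mid, high}:
consensus = [10, 30, 60]
print(credibility([2, 5, 13], consensus))   # honest rater: weight close to 1
print(credibility([15, 4, 1], consensus))   # divergent rater: much smaller weight
```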


Similar Articles

KL realignment for speaker diarization with multiple feature streams

This paper investigates the use of Kullback-Leibler (KL) divergence based realignment, with application to speaker diarization. KL divergence based realignment operates directly on the speaker posterior distribution estimates and is compared with traditional realignment performed using an HMM/GMM system. We hypothesize that using posterior estimates to re-align speaker boundarie...
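A minimal sketch of the underlying principle (illustrative only; the paper's actual system and feature pipeline are more involved): summarize each segment and each speaker cluster by a posterior distribution over speaker classes, then reassign each segment to the speaker whose distribution is closest in KL divergence.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions, smoothed to avoid log(0)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def realign(segment_posteriors, speaker_posteriors):
    """Reassign each segment to the speaker whose reference posterior
    is closest in KL divergence to the segment's estimate."""
    return [min(range(len(speaker_posteriors)),
                key=lambda s: kl(seg, speaker_posteriors[s]))
            for seg in segment_posteriors]
```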


Notes on Kullback-Leibler Divergence and Likelihood

The Kullback-Leibler (KL) divergence is a fundamental equation of information theory that quantifies the proximity of two probability distributions. Although difficult to grasp from the equation alone, an intuition and understanding of the KL divergence arises from its intimate relationship with likelihood theory. We discuss how KL divergence arises from likelihood theory in an attempt t...
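This relationship can be checked numerically: KL(p || q) is the expectation, under p, of the log-likelihood ratio log p(x) − log q(x). A small self-contained check (illustrative code, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two discrete distributions over the same three-point support.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.5, 0.3])

# Analytic KL(p || q).
kl = np.sum(p * np.log(p / q))

# Likelihood view: KL(p || q) = E_{x~p}[log p(x) - log q(x)],
# estimated here by Monte Carlo sampling from p.
x = rng.choice(len(p), size=200_000, p=p)
mc = np.mean(np.log(p[x]) - np.log(q[x]))

print(kl, mc)  # the Monte Carlo estimate converges to the analytic value
```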


Stochastic Mirror Descent with Inexact Prox-Mapping in Density

Appendix A: Strong convexity. As discussed, the posterior from Bayes' rule can be viewed as the optimum of the optimization problem in Eq. (1). We show that the objective function is strongly convex w.r.t. the KL divergence. Proof of Lemma 1. The lemma follows directly from the generalized Pythagoras theorem for Bregman divergences. In particular, for the KL divergence, we have KL(q_1 || q) = KL(q ...
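For reference, the truncated identity invokes the generalized Pythagoras theorem for the KL divergence (the Bregman divergence generated by negative entropy). One standard statement, recalled here from the information-geometry literature rather than from the paper itself, is

\[
\mathrm{KL}(q_1 \,\|\, q) \;\ge\; \mathrm{KL}(q_1 \,\|\, q^{*}) + \mathrm{KL}(q^{*} \,\|\, q),
\]

where \(q^{*}\) is the information projection of \(q\) onto a convex set containing \(q_1\); equality holds when that set is affine.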


Optimism in Reinforcement Learning Based on Kullback-Leibler Divergence

We consider model-based reinforcement learning in finite Markov Decision Processes (MDPs), focussing on so-called optimistic strategies. Optimism is usually implemented by carrying out extended value iterations, under a constraint of consistency with the estimated model transition probabilities. In this paper, we strongly argue in favor of using the Kullback-Leibler (KL) divergence for this pur...
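In one common formulation of KL-based optimism (our paraphrase under illustrative notation, not necessarily the paper's), the extended value iteration maximises over transition models inside a KL ball around the empirical estimate \(\hat p\):

\[
\tilde V(s) \;=\; \max_{a}\; \max_{p \,:\, \mathrm{KL}(\hat p(\cdot \mid s,a) \,\|\, p) \le \varepsilon(s,a)} \Big( r(s,a) + \gamma \sum_{s'} p(s')\, V(s') \Big),
\]

where \(\varepsilon(s,a)\) shrinks with the number of visits to \((s,a)\), so the optimism fades as the model estimate sharpens.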


On w-mixtures: Finite convex combinations of prescribed component distributions

We consider the space of w-mixtures, i.e., the set of finite statistical mixtures sharing the same prescribed component distributions. The geometry induced by the Kullback-Leibler (KL) divergence on this family of w-mixtures is a dually flat space in information geometry, called the mixture family manifold. It follows that the KL divergence between two w-mixtures is equivalent to a Bregman Div...
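Spelling out the setting, with notation that is ours rather than the paper's: a w-mixture with fixed component distributions \(p_1, \dots, p_k\) is

\[
m_w(x) \;=\; \sum_{i=1}^{k} w_i\, p_i(x), \qquad w_i \ge 0, \quad \sum_{i} w_i = 1,
\]

and the stated equivalence reads \(\mathrm{KL}(m_w \,\|\, m_{w'}) = B_F(w : w')\), a Bregman divergence whose generator \(F(w) = \int m_w(x) \log m_w(x)\, dx\) is the negative Shannon entropy of the mixture, which is convex in the weights \(w\).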



Journal:

Volume:   Issue:

Pages: -

Publication date: 2015